AI in Mediation: Balancing Innovation with Empathy and Fairness

insights - 14 November 2024

A practical overview of the successes and pitfalls of using AI in mediation.

Over the last few decades, online dispute resolution (ODR) has become increasingly common. AI has made significant strides in various fields, and one of its emerging applications is in mediation. The success of platforms such as NextLevel Mediation and SmartSettle One in less complex mediation cases has raised questions about the role AI should play in mediation going forward.

While human mediators remain a valuable and necessary asset, the use of AI has sparked debate over whether the technology is best suited to a supportive capacity (as a ‘fourth party’ mediator), or whether it can substitute for a human mediator altogether.


AI in a Supportive Capacity


AI’s ‘role in mediation raises unique questions about the balance between human empathy and machine efficiency.’ – Robert Bergman, CEO of ODR platform NextLevel Mediation.


In a complementary role, AI augments the capabilities of the human mediator. The technology currently in use is based on large language models (LLMs), which are designed to generate output in response to user input. This output is derived using ‘data mining’ and ‘scraping’ techniques, which draw on readily available datasets and downloadable content. Examples of systems in use are set out below, followed by a short illustrative sketch:


1. Decision support systems, which allow for the consideration of various outcomes based on a given set of factors (input by the mediator); and


2. Knowledge support systems, which retrieve case law and statutory provisions relevant to the situation at hand.
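
To make the first of these concrete, the sketch below scores two candidate outcomes against factors a mediator might input. It is a minimal illustration only: the factor names, weights, and outcomes are all hypothetical, and a real decision support system would be considerably more sophisticated.

```python
# Minimal sketch of a decision support system: scoring possible
# settlement outcomes against mediator-supplied factors.
# All factor names, weights and outcomes are hypothetical, invented
# purely for illustration.

# Factors the mediator inputs, each scored 0.0-1.0 for the case at hand.
factors = {
    "strength_of_documentary_evidence": 0.8,
    "parties_willingness_to_settle": 0.6,
    "cost_of_continued_dispute": 0.9,
}

# Each candidate outcome weights the factors differently.
outcomes = {
    "early_settlement": {
        "strength_of_documentary_evidence": 0.2,
        "parties_willingness_to_settle": 0.5,
        "cost_of_continued_dispute": 0.3,
    },
    "structured_negotiation": {
        "strength_of_documentary_evidence": 0.4,
        "parties_willingness_to_settle": 0.3,
        "cost_of_continued_dispute": 0.3,
    },
}

def score(outcome_weights, factors):
    """Weighted sum of factor scores for one candidate outcome."""
    return sum(w * factors[f] for f, w in outcome_weights.items())

for name, weights in outcomes.items():
    print(f"{name}: {score(weights, factors):.2f}")
```

The design point is simply that the mediator supplies the judgments (the factor scores), and the system makes the trade-offs between outcomes explicit.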


AI as the Third-Party Mediator


While much of the technology that would allow AI to substitute for a human mediator is still at the theoretical stage, certain models have begun to take on this role, such as:


1. Case reasoning systems – similar to knowledge support systems in that they examine available case law; however, they go a step further, analysing the success and validity of those cases and applying them to the facts of the dispute. In doing so, they are able to ‘acknowledge’ that a course of action which has previously led to a negative outcome is less preferable and should be avoided.


2. Rule-based systems – through decision-tree, probability-based reasoning, these systems weigh the likely outcome of a course of action against the variables specific to each case and, from this, generate the most preferable course of action (a minimal sketch of this kind of reasoning follows this list).


3. Intelligent interface systems – these systems use natural language processing to account for the nuances present in human communication, allowing AI to ‘read between the lines’. This technology is largely theoretical; if realised, however, it would bridge the communication gap between human users and AI systems.
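
As a concrete (and deliberately simplified) illustration of the rule-based approach in point 2, the sketch below computes the probability-weighted expected value of two courses of action and recommends the higher-scoring one. Every action, probability and figure is invented for illustration; real systems would model far richer case variables.

```python
# Minimal sketch of decision-tree, probability-based reasoning: each
# course of action branches into possible results, each with a
# probability and a value to the party. The system recommends the
# action with the highest expected value. All figures are invented
# for illustration only.

decision_tree = {
    "accept_offer": [
        # (probability, value) pairs for the possible results
        (1.0, 50_000),                 # certain, immediate settlement
    ],
    "continue_negotiation": [
        (0.6, 70_000),                 # better settlement reached
        (0.4, 30_000),                 # talks stall, weaker position
    ],
}

def expected_value(results):
    """Probability-weighted average value of an action's results."""
    return sum(p * v for p, v in results)

best = max(decision_tree, key=lambda a: expected_value(decision_tree[a]))
for action, results in decision_tree.items():
    print(f"{action}: expected value {expected_value(results):,.0f}")
print(f"recommended: {best}")
```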


Potential Benefits and Risks of AI Mediation


As stated above, AI currently plays a predominantly complementary role; however, as the technology develops, the potential for independent use comes closer to reality. As with any powerful tool, the integration of AI into mediation brings benefits alongside its own set of risks.


1. Mediation on a larger scale


  • AI systems trained on large volumes of mediation data can generate insights based on empirical evidence rather than subjective judgment. These insights may help mediators understand common trends, predict potential outcomes, and provide relevant case studies to the parties involved. When designed and implemented carefully, AI can act as a neutral party that minimises the risk of human bias in decision-making.


  • However, a serious risk is the potential for inaccuracies in the generated output. LLMs are prone to ‘hallucinations’: output which is fabricated or irrelevant to the facts at hand. ChatGPT, for example, has been known to generate fictitious cases, complete with full citations.


2. Neutral third party


  • Deploying a more advanced AI model as a ‘third-party’ mediator would introduce a truly independent and neutral facilitator into the process. This aspect of AI is particularly appealing as it removes the presence, and potential impact, of human bias; a neutral mediator can help build trust between the parties.


  • AI is limited in its ability to understand complex human emotions and nuanced interpersonal dynamics. Conflicts often involve subtleties that require empathy and deep contextual understanding. AI may misinterpret emotional cues, resulting in responses that could escalate tensions instead of resolving them.


3. Scalability and Accessibility


  • AI can make mediation more accessible, especially for underserved or remote populations. By reducing the need for face-to-face meetings, AI-driven mediation platforms can offer affordable and scalable solutions, expanding the reach of mediation services to people who may otherwise struggle to access them.


  • However, as with any newly developed technology, the initial cost of access may delay any shift in existing power imbalances.


4. Regulation of AI


  • The regulation of AI remains ambiguous. In taking a pro-innovation approach, the Government has avoided setting clear guidance on the development and use of AI. As a result, the ethical issues arising from its use – namely liability, consent and data privacy – are not adequately regulated.


  • Overreliance on AI could diminish the role of the mediator as a neutral facilitator, with parties potentially feeling pressured to accept AI-generated outcomes. This could undermine the voluntary nature of mediation, where participants are encouraged to reach mutually agreeable solutions rather than adhere to an imposed outcome.


Mitigating Risks and Maximising Benefits


Currently, the available technology supports a complementary relationship between AI and the mediator. Whether this is the ideal use of the technology cannot be determined until independent AI mediation is achieved. Until then, several strategies can be deployed to strike a balance between leveraging the benefits and mitigating the risks:


1. Human-AI hybrid models


  • A collaborative approach, where AI assists human mediators rather than replacing them, can combine the best of both worlds. AI can handle administrative and analytical tasks, while the human mediator provides empathy, context and oversight. This model allows for a more flexible and holistic mediation process.


2. Training legal professionals, mediators and analysts to correctly use, and check, the output generated by AI, with a view to avoiding the risks presented by potential inaccuracies (a simple illustration of this kind of output-checking follows this list).


3. Introducing clear regulations regarding the ethical use of AI to mitigate risks associated with deviation from ethical standards.
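
To illustrate the second point, the sketch below shows one simple form that output-checking could take: flagging citations in an AI-generated draft that do not appear in a trusted source. The citation patterns and the ‘verified’ set are hypothetical placeholders; in practice the check would run against an authoritative legal database, and a human would still review every flagged item.

```python
# Minimal sketch of output-checking: verifying that citations in
# AI-generated text appear in a trusted source before being relied
# upon. The citations and the 'verified' set below are hypothetical
# placeholders, invented for illustration.

import re

# In practice this would be a lookup against an authoritative law
# database; here it is a hard-coded set for illustration.
verified_citations = {
    "[2021] EWHC 123 (Comm)",
    "[2019] UKSC 45",
}

def check_citations(ai_output: str) -> list[str]:
    """Return any citations in the text not found in the trusted set."""
    cited = re.findall(r"\[\d{4}\] [A-Z]+ ?\d+(?: \([A-Za-z]+\))?", ai_output)
    return [c for c in cited if c not in verified_citations]

draft = "Following [2019] UKSC 45 and [2023] EWHC 999 (Ch), the claim..."
for suspect in check_citations(draft):
    print(f"unverified citation, check manually: {suspect}")
```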


AI holds the potential to enhance mediation by offering efficiency, impartiality and accessibility. However, its application in this sensitive field must be carefully managed to avoid compromising the human aspect essential to successful conflict resolution. By adopting a hybrid approach that combines AI with skilled human mediators, setting up safeguards against bias, and ensuring robust data protection, we can maximise the benefits of AI while respecting the integrity of the mediation process. In doing so, AI can act as a valuable tool in the mediator’s arsenal, helping resolve conflicts more effectively in an increasingly digital world.


If you have any queries about AI mediation, please do not hesitate to get in touch by telephone on 0207 052 3545 or by email at info@kaurmaxwell.com.


This article is for general information only. Its content is not a statement of the law on any subject and does not constitute advice.


Please contact KaurMaxwell for advice before taking any action in reliance on it.